

Search for: All records

Creators/Authors contains: "Kursun, Olcay"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative) period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. This dataset includes 30 hyperspectral cloud images captured during the Summer and Fall of 2022 at Auburn University at Montgomery, Alabama, USA (Latitude N, Longitude W) using a Resonon Pika XC2 hyperspectral imaging camera. Using the Spectronon software, the images were recorded with integration times between 9.0 and 12.0 ms, a frame rate of approximately 45 Hz, and a scan rate of 0.93 degrees per second. The images are calibrated to give spectral radiance in microflicks at 462 spectral bands in the 400–1000 nm wavelength region with a spectral resolution of 1.9 nm. A 17 mm focal length objective lens was used, giving a field of view of 30.8 degrees and an instantaneous field of view of 0.71 mrad. These settings enable detailed spectral analysis of both dynamic cloud formations and clear-sky conditions. Funded by NSF grant 2003740, this dataset is designed to advance understanding of diffuse solar radiation as influenced by cloud coverage. The dataset is organized into 30 folders, each containing a hyperspectral image file (.bip), a header file (.hdr) with metadata, and an RGB render for visual inspection. Additional metadata, including date, time, central pixel azimuth, and altitude, are cataloged in an accompanying MS Excel file. A custom Python program is also provided to facilitate reading and displaying the HSI files; a minimal reading sketch is shown below. The images can also be read and analyzed using the free version of the Spectronon software available at https://resonon.com/software. To enrich this dataset, we have added a supplementary ZIP file containing multispectral (4-channel) image versions of the original hyperspectral scenes, together with the corresponding per-pixel photon flux and spectral radiance values computed from the full spectrum. These additions extend the dataset's utility for machine learning and data fusion research by enabling comparative analysis between reduced-band multispectral imagery and full-spectrum hyperspectral data. The ExpandAI Challenge task is to develop models capable of predicting photon flux and radiance, derived from all 462 hyperspectral bands, using only the four multispectral channels. This benchmark aims to stimulate innovation in spectral information recovery, spectral-spatial inference, and physically informed deep learning for atmospheric imaging applications.
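    The custom reader program itself is not reproduced here. As a minimal sketch of how the ENVI-format .bip/.hdr pairs can be read, the snippet below uses the open-source Spectral Python (SPy) package; the file names are hypothetical placeholders.

    # Minimal sketch: reading one ENVI-format image pair with Spectral
    # Python (SPy, `pip install spectral`). File names are hypothetical.
    import numpy as np
    import spectral.io.envi as envi

    # Each dataset folder holds a .bip image plus a matching .hdr file.
    img = envi.open("cloud_scene_01.hdr", "cloud_scene_01.bip")

    cube = img.load()                          # (lines, samples, 462) array
    wavelengths = np.array(img.bands.centers)  # band centers in nm (400-1000)

    # Spectral radiance (microflicks) at one pixel across all 462 bands.
    pixel_spectrum = cube[100, 200, :]

    # Quick RGB-like preview from the bands nearest 640/550/460 nm.
    rgb = [int(np.argmin(np.abs(wavelengths - w))) for w in (640, 550, 460)]
    preview = cube[:, :, rgb]
    print(cube.shape, wavelengths[:3])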
  2. Lu, Ju (Ed.)
    Neurons throughout the neocortex exhibit selective sensitivity to particular features of sensory input patterns. According to prevailing views, the cortical strategy is to choose features that have a predictable relationship to their spatial and/or temporal context. Such contextually predictable features likely make explicit the causal factors operating in the environment and are therefore likely to have perceptual/behavioral utility. The known details of the functional architecture of cortical columns suggest that cortical extraction of such features is a modular nonlinear operation, in which the input layer, layer 4, performs an initial nonlinear input transform generating proto-features, followed by their linear integration into output features by the basal dendrites of pyramidal cells in the upper layers. Tuning of pyramidal cells to contextually predictable features is guided by the contextual inputs their apical dendrites receive from other cortical columns via long-range horizontal or feedback connections. Our implementation of this strategy in a model of a prototypical V1 cortical column, trained on natural images, reveals the presence of a limited number of contextually predictable orthogonal basis features in the image patterns appearing in the column's receptive field. Upper-layer cells generate an overcomplete Hadamard-like representation of these basis features: each cell carries information about all basis features, but each basis feature contributes either positively or negatively in a pattern unique to that cell (a toy illustration follows below). In tuning selectively to contextually predictable features, the upper layers selectively filter the information they receive from layer 4, emphasizing information about orderly aspects of the sensed environment and downplaying local information that is likely insignificant or distracting. Altogether, the upper-layer output preserves fine discrimination capabilities while acquiring novel higher-order categorization abilities to cluster together input patterns that are different but in some way environmentally related. We find that to be fully effective, our feature-tuning operation requires the collective participation of cells across 7 minicolumns, which together make up a functionally defined 150 μm diameter "mesocolumn." As in real V1 cortex, 80% of model upper-layer cells acquire complex-cell receptive field properties while 20% acquire simple-cell properties. Overall, the design of the model and its emergent properties are fully consistent with the known properties of cortical organization. In conclusion, our feature-extracting circuit might capture the core operation performed by cortical columns in their feedforward extraction of perceptually and behaviorally significant information.
    Free, publicly-accessible full text available October 7, 2026
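    As a toy numpy illustration of the Hadamard-like coding described above (sizes are arbitrary and not taken from the model), each output cell below mixes all basis features with a cell-specific pattern of +1/-1 signs, and the population code remains lossless:

    # Toy Hadamard-like overcomplete code: every cell carries all basis
    # features, each contributing with a cell-specific +/- sign.
    import numpy as np
    from scipy.linalg import hadamard

    n_basis = 8                        # contextually predictable basis features
    H = hadamard(n_basis)              # 8x8 matrix of +1/-1 sign patterns

    basis_activations = np.random.randn(n_basis)   # stand-in feature values
    cell_outputs = H @ basis_activations           # one row = one cell's mix

    # The code loses no information: H.T @ H = n * I.
    recovered = (H.T @ cell_outputs) / n_basis
    assert np.allclose(recovered, basis_activations)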
  3. The CloudPatch-7 Hyperspectral Dataset comprises a manually curated collection of hyperspectral images focused on pixel classification of atmospheric cloud classes. This labeled dataset features 380 patches, each a 50x50 pixel grid, derived from 28 larger, unlabeled parent images approximately 5000x1500 pixels in size. Captured using the Resonon Pika XC2 camera, these images span 462 spectral bands from 400 to 1000 nm. Each patch is extracted from a parent image such that all of its pixels fall within one of seven atmospheric conditions: Dense Dark Cumuliform Cloud, Dense Bright Cumuliform Cloud, Semi-transparent Cumuliform Cloud, Dense Cirroform Cloud, Semi-transparent Cirroform Cloud, Clear Sky - Low Aerosol Scattering (dark), and Clear Sky - Moderate to High Aerosol Scattering (bright). Incorporating contextual information from surrounding pixels enhances pixel classification into these 7 classes, making this dataset a valuable resource for spectral analysis, environmental monitoring, atmospheric science research, and testing machine learning applications that require contextual data; a baseline classification sketch follows below. The parent images are very large but can be made available upon request.
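    A minimal per-pixel baseline under assumed conventions (random stand-in data in place of the real patches, scaled down for speed; every pixel inherits its patch's class label):

    # Hypothetical pixel classification baseline for CloudPatch-7-style data.
    import numpy as np
    from sklearn.ensemble import HistGradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_patches = 40                       # scaled down from the real 380
    patches = rng.random((n_patches, 50, 50, 462), dtype=np.float32)
    labels = rng.integers(0, 7, size=n_patches)   # 7 cloud/sky classes

    # Subsample 50 random pixels per patch to keep the sketch light.
    rows = rng.integers(0, 50, size=(n_patches, 50))
    cols = rng.integers(0, 50, size=(n_patches, 50))
    X = np.stack([patches[i, rows[i], cols[i]]
                  for i in range(n_patches)]).reshape(-1, 462)
    y = np.repeat(labels, 50)            # pixels inherit the patch label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = HistGradientBoostingClassifier(max_iter=50).fit(X_tr, y_tr)
    print("pixel accuracy:", clf.score(X_te, y_te))
    # A contextual extension could append neighborhood statistics per pixel.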
  4. The concept of stimulus feature tuning is fundamental to neuroscience. Cortical neurons acquire their feature-tuning properties by learning from experience, using proxy signs of a tentative feature's potential usefulness that come from the spatial and/or temporal context in which the feature occurs. According to this idea, local but ultimately behaviorally useful features should be the ones that are predictably related to other such features, either preceding them in time or occurring side-by-side with them. Inspired by this idea, in this paper deep neural networks are combined with Canonical Correlation Analysis (CCA) for feature extraction, and the power of the features is demonstrated on unsupervised cross-modal prediction tasks. CCA is a multi-view feature extraction method that finds correlated features across multiple datasets (usually referred to as views or modalities). CCA finds linear transformations of each view such that the extracted canonical components, or features, have maximal mutual correlation. CCA is a linear method, and the features are computed as a weighted sum of each view's variables. Once the weights are learned, CCA can be applied to new examples and used for cross-modal prediction by inferring the target-view features of an example from its given variables in a source (query) view. To test the proposed method, it was applied to the unstructured CIFAR-100 dataset of 60,000 images categorized into 100 classes, which are further grouped into 20 superclasses, and used to demonstrate the mining of image-tag correlations. CCA was performed on the outputs of three pre-trained CNNs: AlexNet, ResNet, and VGG. Taking advantage of the mutually correlated features extracted with CCA, a search for nearest neighbors was performed in the canonical subspace common to both the query and target views to retrieve the best-matching examples in the target view, which successfully predicted the superclass membership of the tested examples without any supervised training (see the retrieval sketch below).
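    A minimal sketch of this retrieval pipeline, with random stand-in features in place of the AlexNet/ResNet/VGG outputs:

    # CCA-based cross-modal retrieval: project both views into the shared
    # canonical subspace, then retrieve nearest target-view neighbors.
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n, d_img, d_tag = 1000, 512, 64
    Z = rng.standard_normal((n, 8))                 # shared latent structure
    X_img = Z @ rng.standard_normal((8, d_img)) + 0.1 * rng.standard_normal((n, d_img))
    X_tag = Z @ rng.standard_normal((8, d_tag)) + 0.1 * rng.standard_normal((n, d_tag))

    cca = CCA(n_components=8).fit(X_img, X_tag)     # learn per-view projections
    U, V = cca.transform(X_img, X_tag)              # canonical features

    # For an image-view query, retrieve the best-matching tag-view examples.
    nn = NearestNeighbors(n_neighbors=5).fit(V)
    _, neighbors = nn.kneighbors(U[:1])
    print(neighbors)                                # indices of retrieved examples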
  5. A novel hyperspectral image classification algorithm is proposed and demonstrated on benchmark hyperspectral images. We also introduce a hyperspectral sky-imaging dataset that we are collecting for detecting the amount and type of cloudiness. An algorithm designed for such systems could improve the spatial and temporal resolution of cloud information vital to understanding Earth's climate. We discuss the nature of the HSI-Cloud dataset being collected and an algorithm we propose for processing it using a categorical-boosting method. The proposed method uses multiple clusterings to augment the dataset and achieves higher pixel classification accuracy. Creating categorical features via clustering enriches the data representation and improves boosting ensembles; a sketch of this idea follows below. For the experimental datasets used in this paper, gradient boosting methods compared favorably with the benchmark algorithms.
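    A sketch of the cluster-as-categorical-feature idea under stated assumptions (random stand-in spectra; scikit-learn's HistGradientBoostingClassifier standing in for the paper's categorical-boosting method):

    # Augment raw spectra with cluster IDs from several k-means clusterings,
    # then pass those IDs to a gradient boosting model as categorical inputs.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import HistGradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.random((5000, 462), dtype=np.float32)   # stand-in pixel spectra
    y = rng.integers(0, 7, size=5000)               # stand-in class labels

    cluster_ids = [
        KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(X)
        for k in (8, 16, 32)                        # multiple clusterings
    ]
    X_aug = np.column_stack([X] + cluster_ids)

    # Mark the three appended columns as categorical for the booster.
    is_cat = np.zeros(X_aug.shape[1], dtype=bool)
    is_cat[-3:] = True
    clf = HistGradientBoostingClassifier(categorical_features=is_cat)
    clf.fit(X_aug, y)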
  6. Free, publicly-accessible full text available March 22, 2026
  7. Implementing local contextual guidance principles in a single-layer CNN architecture, we propose an efficient algorithm for developing broad-purpose representations (i.e., representations transferable to new tasks without additional training) in shallow CNNs trained on limited-size datasets. A contextually guided CNN (CG-CNN) is trained on groups of neighboring image patches picked at random image locations in the dataset. Such neighboring patches are likely to share a common context and are therefore treated, for the purposes of training, as belonging to the same class. Across multiple iterations of such training on different context-sharing groups of image patches, the CNN features optimized in one iteration are transferred to the next iteration for further optimization (a minimal training sketch follows below). In this process, the CNN features acquire higher pluripotency, or inferential utility for arbitrary classification tasks. In applications to natural images and hyperspectral images, we find that CG-CNN learns transferable features similar to those learned by the first layers of well-known deep networks and produces favorable classification accuracies.
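    A minimal PyTorch sketch of this training loop under assumed settings (patch size, jitter range, and the single-conv-layer architecture are illustrative, not the paper's exact configuration):

    # CG-CNN-style training: patches jittered around one image location
    # share a pseudo-class; conv features persist across iterations while
    # the classifier head is re-initialized each time.
    import torch
    import torch.nn as nn

    n_locs, group, psz = 64, 4, 15          # pseudo-classes, patches each, patch size
    images = torch.rand(16, 3, 128, 128)    # stand-in image dataset

    def sample_groups():
        xs, ys = [], []
        for c in range(n_locs):
            img = images[torch.randint(len(images), (1,)).item()]
            i, j = (int(v) for v in torch.randint(0, 128 - psz - 8, (2,)))
            for _ in range(group):           # jittered neighbors share context
                di, dj = (int(v) for v in torch.randint(0, 8, (2,)))
                xs.append(img[:, i + di:i + di + psz, j + dj:j + dj + psz])
                ys.append(c)
        return torch.stack(xs), torch.tensor(ys)

    conv = nn.Sequential(nn.Conv2d(3, 32, 5), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
    for _ in range(5):                       # new locations, new head each iteration
        head = nn.Linear(32, n_locs)
        opt = torch.optim.Adam(list(conv.parameters()) + list(head.parameters()), lr=1e-3)
        X, y = sample_groups()
        for _ in range(20):
            opt.zero_grad()
            nn.functional.cross_entropy(head(conv(X)), y).backward()
            opt.step()
    # `conv` now holds the transferable, contextually guided features.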